61 research outputs found

    Investigation of stakeholders' commitment to information security awareness programs

    Full text link
    Organisations have become increasingly dependent on technology in order to compete in their respective markets. As IT advances at a rapid pace, so does its complexity, giving rise to new security vulnerabilities and methods of attack. Even though human factors have been recognised as playing a crucial role in information security management, the effects of weakness of will and lack of commitment on the part of stakeholders (i.e., employers and employees) have never been factored into the design and delivery of awareness programs. To this end, this paper investigates the impact of the availability of awareness programs, and of end-user drive and commitment, on the design, delivery and success of information security awareness programs.

    Guest editorial: In Journal of Networks, v.7, n.3

    Get PDF
    Networking of computing devices has been going through rapid evolution and thus continues to be an ever-expanding area of importance. New technologies, protocols, services and usage patterns have contributed to the major research interests in this area of computer science. This Special Issue is an effort to bring forward some of the interesting developments being pursued by researchers in different parts of the globe, and our objective is to provide the readership with some insight into the latest innovations in computer networking. The Special Issue presents selected papers from the thirteenth conference of the series (ICCIT 2010), held during December 23-25, 2010 at the Ahsanullah University of Science and Technology. The first ICCIT was held in Dhaka, Bangladesh, in 1998. Since then the conference has grown to be one of the largest computer and IT related research conferences in the South Asian region, with participation of academics and researchers from many countries around the world. Starting in 2008, the proceedings of ICCIT have been included in IEEE Xplore. In 2010, a total of 410 full papers were submitted to the conference, of which 136 were accepted after reviews conducted by an international program committee comprising 81 members from 16 countries, an acceptance rate of 33%. From these 136 papers, 14 highly ranked manuscripts were invited for this Special Issue. The authors were advised to enhance their papers significantly and submit them to undergo review for suitability of inclusion in this publication. Of those, eight papers survived the review process and were selected for inclusion in this Special Issue. The authors of these papers represent academic and research institutions from Australia, Bangladesh, Japan, Korea and the USA.
These papers address issues concerning different domains of networks, namely optical fiber communication, wireless and interconnection networks, networking hardware and software, and network mobility. The paper titled "Virtualization in Wireless Sensor Network: Challenges and Opportunities" argues in favor of bringing heterogeneous sensors under a common virtual framework so that issues such as flexibility, diversity, management and security can be handled practically. The authors, Md. Motaharul Islam and Eui-Num Huh, propose an architecture for sensor virtualization and present the current status of, and the challenges and opportunities for, further research on the topic. The manuscript "Effect of Polarization Mode Dispersion on the BER Performance of Optical CDMA" deals with the impact of polarization mode dispersion on the bit error rate performance of direct-sequence optical code division multiple access. The authors, Md. Jahedul Islam and Md. Rafiqul Islam, present an analytical approach to determining the impact of different performance parameters, and show that the bit error rate performance is improved significantly more by third-order polarization mode dispersion than by its first- or second-order counterparts. The authors of "Cost and Efficiency Analysis of NEMO Protocol Entities", Md. Shohrab Hossain, Mohammed Atiquzzaman and William Ivancic, present an analytical model for estimating the cost incurred by the major mobility entities of a NEMO, defining a new metric for cost calculation in the process. Both the new metric and the analytical model are likely to be useful to network engineers in estimating resource requirements at the key entities when designing such a network. The article titled "A Highly Flexible LDPC Decoder using Hierarchical Quasi-Cyclic Matrix with Layered Permutation" deals with Low-Density Parity-Check decoders.
The authors, Vikram Arkalgud Chandrasetty and Syed Mahfuzul Aziz, propose a novel multi-level structured hierarchical matrix approach for flexibly generating codes of different lengths depending on the requirements of the application. The manuscript "Analysis of Performance Limitations in Fiber Bragg Grating Based Optical Add-Drop Multiplexer due to Crosstalk", contributed by M. Mahiuddin and M. S. Islam, proposes a new method of handling crosstalk with a fiber Bragg grating based optical add-drop multiplexer (OADM); the authors show with an analytical model that different parameters improve with their proposed OADM. The paper "High Performance Hierarchical Torus Network Under Adverse Traffic Patterns" addresses issues related to the hierarchical torus network (HTN) under adverse traffic patterns. The authors, M.M. Hafizur Rahman, Yukinori Sato and Yasushi Inoguchi, observe that the dynamic communication performance of an HTN under adverse traffic conditions had not yet been addressed. They evaluate the performance of the HTN against some other relevant networks, and it is interesting to see that the HTN outperforms these counterparts in terms of throughput and data transfer under adverse traffic. The manuscript titled "Dynamic Communication Performance Enhancement in Hierarchical Torus Network by Selection Algorithm", also contributed by M.M. Hafizur Rahman, Yukinori Sato and Yasushi Inoguchi, introduces three simple adaptive routing algorithms for efficient use of physical links and virtual channels in the hierarchical torus network, and shows that these approaches yield better performance for such networks. The final paper, "An Optimization Technique for Improved VoIP Performance over Wireless LAN", is contributed by five authors: Tamal Chakraborty, Atri Mukhopadhyay, Suman Bhunia, Iti Saha Misra and Salil K. Sanyal.
The authors propose an optimization technique for configuring the parameters of the access points, together with an optimization mechanism for appropriately tuning the threshold of the active queue management system. Put together, these mechanisms improve VoIP performance significantly under congestion. Finally, the Guest Editors would like to express their sincere gratitude to the 15 reviewers besides the guest editors themselves (Khalid M. Awan, Mukaddim Pathan, Ben Townsend, Morshed Chowdhury, Iftekhar Ahmad, Gour Karmakar, Shivali Goel, Hairulnizam Mahdin, Abdullah A Yusuf, Kashif Sattar, A.K.M. Azad, F. Rahman, Bahman Javadi, Abdelrahman Desoky and Lenin Mehedy) from several countries (Australia, Bangladesh, Japan, Pakistan, UK and USA) who have given immensely to this process. They responded to the Guest Editors in the shortest possible time and dedicated their valuable time to ensuring that the Special Issue contains high-quality papers with significant novelty and contributions.

    Decision trees and multi-level ensemble classifiers for neurological diagnostics

    Full text link
    Cardiac autonomic neuropathy (CAN) is a well-known complication of diabetes that leads to impaired regulation of blood pressure and heart rate, and increases the risk of cardiac-associated mortality in diabetes patients. The neurological diagnostics of CAN progression is an important problem that is being actively investigated. This paper uses data collected as part of a large and unique Diabetes Screening Complications Research Initiative (DiScRi) in Australia, with data from numerous tests related to diabetes, to classify CAN progression. The present paper is devoted to recent experimental investigations of the effectiveness of decision trees, ensemble classifiers and multi-level ensemble classifiers for the neurological diagnostics of CAN. We present the results of experiments comparing the effectiveness of the ADTree, J48, NBTree, RandomTree, REPTree and SimpleCart decision tree classifiers. Our results show that SimpleCart was the most effective for the DiScRi data set in classifying CAN. We also investigated and compared the effectiveness of AdaBoost, Bagging, MultiBoost, Stacking, Decorate, Dagging and Grading, based on Ripple Down Rules, as examples of ensemble classifiers. Further, we investigated the effectiveness of these ensemble methods as a function of the base classifiers, and determined that Random Forest performed best as a base classifier, while AdaBoost, Bagging and Decorate achieved the best outcomes as meta-classifiers in this setting. Finally, we investigated the ability of the best-performing meta-classifiers to enhance performance further within the framework of a multi-level classification paradigm. Experimental results show that the multi-level paradigm performed best when Bagging and Decorate were combined in the construction of a multi-level ensemble classifier.

    An efficient approach based on trust and reputation for secured selection of grid resources

    Full text link
    Security is a principal concern in offering an infrastructure for the formation of general-purpose computational grids. A number of grid implementations have been devised to deal with the security concerns by authenticating the users, hosts and their interactions in an appropriate fashion. Resource management systems that are sophisticated and secure are essential for the efficient and beneficial deployment of grid computing services. The chief factors that complicate the secured selection of grid resources are the wide range of choices and the high degree of unfamiliarity between parties. Moreover, the lack of a high degree of trust between entities is likely to prevent efficient resource allocation and utilisation. In this paper, we present an efficient approach for the secured selection of grid resources, so as to achieve secure execution of jobs. This approach utilises trust and reputation for securely selecting grid resources. To start with, the self-protection capability and reputation weightage of all the entities are computed, and based on those values the trust factor (TF) of each entity is determined. The reputation weightage of an entity is a measure of both the user's feedback and other entities' feedback. Entities with higher TF values are selected for the secured execution of jobs. To make the proposed approach more comprehensive, a novel method is employed for evaluating the user's feedback on the basis of the existing feedback available regarding the entities. The approach is shown to be scalable to an increased number of user jobs and grid entities, and experiments show that it offers desirable efficiency in the secured selection of grid resources.
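The selection pipeline the abstract outlines (self-protection capability and reputation weightage combined into a trust factor, then thresholded) can be sketched as follows. The weights, scales and combination formulas here are assumptions for illustration only; the paper's actual equations may differ.

```python
# Hypothetical trust-factor computation: reputation blends the user's own
# feedback with other entities' feedback, then TF blends reputation with
# the entity's self-protection capability. All weights are illustrative.
def reputation_weightage(user_feedback, other_feedback, user_weight=0.6):
    """Combine the user's feedback with feedback from other entities."""
    avg_user = sum(user_feedback) / len(user_feedback)
    avg_other = sum(other_feedback) / len(other_feedback)
    return user_weight * avg_user + (1 - user_weight) * avg_other

def trust_factor(self_protection, rep_weightage, alpha=0.5):
    """TF as a weighted blend of self-protection and reputation."""
    return alpha * self_protection + (1 - alpha) * rep_weightage

def select_entities(entities, threshold=0.7):
    """Keep entities whose TF meets the threshold, best first."""
    scored = {name: trust_factor(sp, reputation_weightage(uf, of))
              for name, (sp, uf, of) in entities.items()}
    return sorted((n for n, tf in scored.items() if tf >= threshold),
                  key=lambda n: -scored[n])

entities = {  # name: (self-protection, user feedback, others' feedback)
    "grid-A": (0.9, [0.8, 0.9], [0.85]),
    "grid-B": (0.4, [0.5, 0.3], [0.45]),
}
print(select_entities(entities))  # ['grid-A']: only grid-A passes 0.7
```

Only the high-TF entity survives the threshold, which mirrors the paper's claim that jobs are routed to the entities with the highest trust factors.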

    Managing data using neighbor replication on a triangular-grid structure.

    Get PDF
    Data is one of the domains in grid research that deals with the storage, replication, and management of large data sets in a distributed environment. All-data-to-all-sites replication schemes such as read-one-write-all and the tree grid structure (TGS) are popular techniques for the replication and management of data in this domain. However, these techniques have weaknesses in terms of data storage capacity and data access times, because some number of sites must 'agree' in common to execute certain transactions. In this paper, we propose an all-data-to-some-sites scheme called the neighbor replication on triangular grid (NRTG) technique, in which only neighbors hold the replicated data, thus minimizing the storage capacity while providing high update availability. The technique also tolerates failures such as server failures, site failures and even network partitioning, using remote procedure calls (RPC).
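The all-data-to-some-sites placement can be illustrated with a small sketch. This is not the paper's exact construction: the triangular grid is modelled here, as an assumption, as a triangular lattice in axial coordinates where each site has up to six neighbours, and a replica set is simply a site plus its immediate neighbours.

```python
# Illustrative NRTG-style replica placement on a triangular lattice.
# Each data item is replicated only to its primary site and that site's
# neighbours, rather than to all sites.
NEIGHBOUR_OFFSETS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def neighbours(site, sites):
    """Sites adjacent to `site` on the triangular lattice."""
    q, r = site
    return [(q + dq, r + dr) for dq, dr in NEIGHBOUR_OFFSETS
            if (q + dq, r + dr) in sites]

def replica_set(site, sites):
    """Neighbour replication: the site itself plus its neighbours."""
    return [site] + neighbours(site, sites)

# A 3x3 block of sites; the centre site replicates to itself plus its
# six lattice neighbours, i.e. 7 of the 9 sites rather than all 9.
sites = {(q, r) for q in range(3) for r in range(3)}
print(len(replica_set((1, 1), sites)))
```

The storage saving grows with grid size: the replica set stays bounded (at most seven sites per item in this model) while an all-data-to-all-sites scheme grows linearly with the number of sites.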

    An adaptive behavioral-based incremental batch learning malware variants detection model using concept drift detection and sequential deep learning

    Get PDF
    Malware variants are the major emerging threats facing cybersecurity, owing to the potential damage to computer systems. Many solutions have been proposed for detecting malware variants. However, accurate detection is challenging due to the constantly evolving nature of the malware variants, which causes concept drift. Existing malware detection solutions assume that the mapping learned from historical malware features will remain valid for new and future malware: the relationship between input features and the class label is considered stationary, which does not hold for the ever-evolving nature of malware variants. Malware features change dynamically due to code obfuscations, mutations, and the modifications made by malware authors to change the features' distribution and thus evade detection, rendering the detection model obsolete and ineffective. This study presents an Adaptive behavioral-based Incremental Batch Learning Malware Variants Detection model using concept drift detection and sequential deep learning (AIBL-MVD) to accommodate new malware variants. Malware behaviors were extracted using dynamic analysis by running the malware files in a sandbox environment and collecting their Application Programming Interface (API) traces. The malware samples were sorted by first appearance to capture the change characteristics of malware variants. The base classifier was then trained on a subset of historical malware samples using a sequential deep learning model. New malware samples were mixed with a subset of old data and gradually introduced to the learning model in an adaptive-batch-size incremental learning manner to address the catastrophic forgetting dilemma of incremental learning. The statistical process control technique was used to detect concept drift, both as an indication for incrementally updating the model and as a means of reducing the frequency of model updates.
Results from extensive experiments show that the proposed model is superior in terms of detection rate and efficiency compared with the static model, periodic retraining approaches, and the fixed-batch-size incremental learning approach. The model maintains an average of 99.41% detection accuracy for new and variant malware with a low update frequency of 1.35 times per month.
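The statistical-process-control step described above can be sketched as follows. The control limits here follow the common drift-detection rule of monitoring the classifier's error rate as a Bernoulli stream and flagging drift when it exceeds the best observed rate by three standard deviations; the paper's exact procedure and thresholds may differ.

```python
# Minimal SPC-style concept drift detector: track the running error rate
# p and its standard deviation s; remember the best (lowest) p + s seen;
# signal drift when p + s exceeds p_min + 3 * s_min (out-of-control).
import math

class SPCDriftDetector:
    def __init__(self):
        self.n = 0          # predictions seen
        self.errors = 0     # misclassifications seen
        self.p_min = float("inf")
        self.s_min = float("inf")

    def update(self, misclassified):
        """Feed one prediction outcome; return True when drift is flagged."""
        self.n += 1
        self.errors += int(misclassified)
        p = self.errors / self.n
        s = math.sqrt(p * (1 - p) / self.n)
        if p + s < self.p_min + self.s_min:   # track the best state so far
            self.p_min, self.s_min = p, s
        return p + s > self.p_min + 3 * self.s_min

detector = SPCDriftDetector()
# Stable period (5% error) followed by a degraded period (80% error),
# standing in for a stream of malware classification outcomes.
stream = [i % 20 == 0 for i in range(200)] + [i % 5 != 0 for i in range(100)]
drift_at = next((i for i, m in enumerate(stream) if detector.update(m)), None)
print(drift_at)  # drift flagged shortly after the error rate jumps at i=200
```

In the paper's setting, such a signal would trigger an incremental model update; keeping updates tied to detected drift, rather than to a fixed schedule, is what drives the low reported update frequency.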

    Secure accounting and payment infrastructure for grid computing

    Full text link
    In this paper, we propose an architecture of accounting and payment services for service-oriented grid computing systems. The proposed accounting and payment services provide the mechanisms for service providers to be paid for authorized use of their resources. They support the recording of usage data, secure storage of that data, and analysis of that data for purposes such as billing. The architecture allows a variety of payment methods; it is scalable, secure and convenient, and reduces the overall cost of payment processing while taking into account the requirements of grid computing systems.
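The usage-recording and billing portion of such a service can be sketched as below. The record fields and per-resource rates are invented for illustration; the paper's actual schema and protocol are richer (secure storage, multiple payment methods, and so on).

```python
# Hypothetical usage accounting: append one record per job, then derive
# a bill for a user by aggregating records against per-resource rates.
from dataclasses import dataclass

@dataclass
class UsageRecord:
    user: str
    provider: str
    cpu_hours: float
    gb_stored: float

RATES = {"cpu_hours": 0.10, "gb_stored": 0.02}  # illustrative prices

def bill(records, user):
    """Total owed by `user`, aggregated over all recorded usage."""
    return round(sum(r.cpu_hours * RATES["cpu_hours"] +
                     r.gb_stored * RATES["gb_stored"]
                     for r in records if r.user == user), 2)

records = [
    UsageRecord("alice", "grid-1", cpu_hours=100.0, gb_stored=50.0),
    UsageRecord("alice", "grid-2", cpu_hours=10.0, gb_stored=0.0),
    UsageRecord("bob", "grid-1", cpu_hours=5.0, gb_stored=5.0),
]
print(bill(records, "alice"))  # 100*0.10 + 50*0.02 + 10*0.10 = 12.0
```

Keeping raw per-job records (rather than only running totals) is what makes later analysis for billing, auditing, and dispute resolution possible, which is the role the abstract assigns to the recording service.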

    Dynamic parallel job scheduling in multi-cluster computing systems

    Full text link
    Job scheduling is a complex problem, yet it is fundamental to sustaining and improving the performance of parallel processing systems. In this paper, we address an online parallel job scheduling problem in heterogeneous multi-cluster computing systems. We propose a new space-sharing scheduling policy and show that it performs substantially better than conventional policies.
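The abstract does not specify the proposed policy, so the sketch below shows only a generic space-sharing baseline for a heterogeneous multi-cluster, as an assumption for illustration: each arriving job is placed on the cluster with the most free processors that can hold it, and those processors are dedicated to the job until it completes.

```python
# Generic space-sharing placement across heterogeneous clusters:
# worst-fit (most free processors first), with processors dedicated
# to each placed job. Cluster sizes and jobs are illustrative.
def schedule(jobs, clusters):
    """Assign each job (name, procs needed) to a cluster, worst-fit."""
    free = dict(clusters)               # cluster -> free processors
    placement = {}
    for name, need in jobs:
        candidates = [c for c, f in free.items() if f >= need]
        if not candidates:
            placement[name] = None      # job must wait in the queue
            continue
        chosen = max(candidates, key=lambda c: free[c])
        free[chosen] -= need            # processors are space-shared
        placement[name] = chosen
    return placement

clusters = {"A": 64, "B": 32, "C": 16}
jobs = [("j1", 40), ("j2", 30), ("j3", 20), ("j4", 10)]
print(schedule(jobs, clusters))
```

Worst-fit keeps the largest contiguous capacity available for future wide jobs, which is one of the trade-offs any space-sharing policy in a heterogeneous multi-cluster has to manage.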